
Collaborative Multi-Object Tracking with Conformal Uncertainty Propagation

Sanbao Su*

Songyang Han

Yiming Li*

Zhili Zhang*

Chen Feng*

Caiwen Ding*

Fei Miao*

* External authors

RAL 2024


Abstract

Object detection and multiple object tracking (MOT) are essential components of self-driving systems. Accurate detection and uncertainty quantification are both critical for onboard modules, such as perception, prediction, and planning, to improve the safety and robustness of autonomous vehicles. Collaborative object detection (COD) has been proposed to improve detection accuracy and reduce uncertainty by leveraging the viewpoints of multiple agents. However, little attention has been paid to how to leverage the uncertainty quantification from COD to enhance MOT performance. In this paper, as the first attempt to address this challenge, we design an uncertainty propagation framework called MOT-CUP. Our framework first quantifies the uncertainty of COD through direct modeling and conformal prediction, and then propagates this uncertainty information into the motion prediction and association steps. MOT-CUP is designed to work with different collaborative object detectors and baseline MOT algorithms. We evaluate MOT-CUP on V2X-Sim, a comprehensive collaborative perception dataset, and demonstrate a 2% improvement in accuracy and a 2.67x reduction in uncertainty compared to the baselines, e.g., SORT and ByteTrack. In scenarios characterized by high occlusion levels, MOT-CUP demonstrates a noteworthy 4.01% improvement in accuracy. MOT-CUP demonstrates the importance of uncertainty quantification in both COD and MOT, and provides the first attempt to improve accuracy and reduce uncertainty in MOT based on COD through uncertainty propagation. Our code is publicly available at https://coperception.github.io/MOT-CUP/.
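The conformal-prediction step mentioned in the abstract can be illustrated with a minimal split-conformal sketch: a held-out calibration set yields a quantile of nonconformity scores, which then defines calibrated intervals around new detections. The names below (e.g., `conformal_quantile`, `calib_errors`, `box_interval`) are hypothetical and not taken from the MOT-CUP codebase; this is a generic illustration of the conformal calibration idea, not the authors' implementation.

```python
import numpy as np

def conformal_quantile(calib_scores: np.ndarray, alpha: float = 0.1) -> float:
    """Split conformal prediction: return the (1 - alpha) quantile of
    nonconformity scores from a held-out calibration set, using the
    standard finite-sample correction (n + 1 in the numerator)."""
    n = len(calib_scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(calib_scores, q_level, method="higher"))

# Hypothetical calibration data: absolute errors between predicted and
# ground-truth box coordinates, one score per calibration detection.
calib_errors = np.abs(np.random.randn(500))          # placeholder scores
q_hat = conformal_quantile(calib_errors, alpha=0.1)  # calibrated radius

# At test time, a predicted coordinate x_pred is wrapped in an interval
# [x_pred - q_hat, x_pred + q_hat] that covers the true value with
# probability >= 1 - alpha (marginally, under exchangeability).
x_pred = 3.2
box_interval = (x_pred - q_hat, x_pred + q_hat)
print(f"90% conformal interval for the box coordinate: {box_interval}")
```

In a tracking pipeline, interval widths of this kind could serve as the per-detection uncertainty that is propagated into motion prediction and data association, which is the role conformal prediction plays in the framework described above.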

Related Publications

What is the Solution for State-Adversarial Multi-Agent Reinforcement Learning?

TMLR, 2024
Songyang Han, Sanbao Su*, Sihong He*, Shuo Han*, Haizhao Yang*, Shaofeng Zou*, Fei Miao*

Various methods for Multi-Agent Reinforcement Learning (MARL) have been developed with the assumption that agents' policies are based on accurate state information. However, policies learned through Deep Reinforcement Learning (DRL) are susceptible to adversarial state pertu…

A Multi-Agent Reinforcement Learning Approach for Safe and Efficient Behavior Planning of Connected Autonomous Vehicles

IEEE T-ITS, 2024
Songyang Han, Shanglin Zhou*, Jiangwei Wang*, Lynn Pepin*, Caiwen Ding*, Jie Fu*, Fei Miao*

The recent advancements in wireless technology enable connected autonomous vehicles (CAVs) to gather information about their environment by vehicle-to-vehicle (V2V) communication. In this work, we design an information-sharing-based multi-agent reinforcement learning (MARL) …
